Toward Incorporation of Relevant Documents in word2vec

Authors

  • Navid Rekabsaz
  • Bhaskar Mitra
  • Mihai Lupu
  • Allan Hanbury
Abstract

Recent advances in neural word embedding provide significant benefit to various information retrieval tasks. However, as shown by recent studies, adapting the embedding models to the needs of IR tasks can bring considerable further improvements. Embedding models in general define term relatedness by exploiting the terms' co-occurrences in short-window contexts. An alternative (and well-studied) approach in IR for finding terms related to a query is using local information, i.e., a set of top-retrieved documents. In view of these two methods of term relatedness, in this work we report our study on incorporating the local information of the query into the word embeddings. One main challenge in this direction is that the dense vectors of word embeddings, and their estimation of term-to-term relatedness, remain difficult to interpret and hard to analyze. As an alternative, explicit word representations propose vectors whose dimensions are easily interpretable, and recent methods show performance competitive with the dense vectors. We introduce a neural-based explicit representation, rooted in the conceptual ideas of the word2vec Skip-Gram model. The method provides interpretable explicit vectors while keeping the effectiveness of the Skip-Gram model. The evaluation of various explicit representations on word association collections shows that the newly proposed method outperforms the state-of-the-art explicit representations when tasked with ranking highly similar terms. Based on the introduced explicit representation, we discuss our approaches to integrating local documents into globally-trained embedding models and present preliminary results.
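The abstract contrasts dense embeddings with explicit representations whose dimensions correspond to interpretable context terms, with relatedness in both cases typically scored by cosine similarity. The following minimal sketch (toy vectors and context labels are illustrative assumptions, not the paper's data) shows why an explicit vector is easy to inspect: each dimension is a named context term.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy explicit vectors: each dimension corresponds to a context term,
# so the representation stays interpretable. Values are illustrative only.
contexts = ["retrieval", "ranking", "embedding", "music"]
vec_search = np.array([0.9, 0.7, 0.4, 0.0])  # a hypothetical "search" term
vec_index  = np.array([0.8, 0.6, 0.5, 0.1])  # a hypothetical "index" term
vec_guitar = np.array([0.0, 0.1, 0.0, 0.9])  # an unrelated term

print(cosine(vec_search, vec_index))   # high: shared IR-related contexts
print(cosine(vec_search, vec_guitar))  # low: almost no shared contexts
```

Because each dimension is a named context term, a high score can be traced back to the specific contexts driving it, which is exactly the interpretability advantage the abstract attributes to explicit representations.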


Similar resources

Corpus specificity in LSA and Word2vec: the role of out-of-domain documents

Latent Semantic Analysis (LSA) and Word2vec are among the most widely used word embeddings. Despite the popularity of these techniques, the precise mechanisms by which they acquire new semantic relations between words remain unclear. In the present article we investigate whether the capacity of LSA and Word2vec to identify relevant semantic dimensions increases with the size of the corpus. One intuitive hyp...


ECNU at 2017 eHealth Task 2: Technologically Assisted Reviews in Empirical Medicine

The 2017 CLEF eHealth Task 2 requires ranking the retrieval results given by a medical database. The purpose is to reduce the effort experts devote to finding truly relevant documents. We utilize a customized Learning-to-Rank model to re-rank the retrieval results. Additionally, we adopt word2vec to represent queries and documents and compute the relevance score by cosine distance. We find that th...
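The blurb above describes scoring query-document relevance by representing both with word2vec and comparing them by cosine. One common way to do this (a sketch under the assumption that texts are represented as the mean of their word vectors; the vocabulary and vectors below are hypothetical, not from the ECNU system) is:

```python
import numpy as np

# Hypothetical pretrained embeddings (word -> dense vector); in practice
# these would come from a word2vec model trained on a large corpus.
emb = {
    "medical": np.array([0.8, 0.1, 0.2]),
    "review":  np.array([0.6, 0.3, 0.1]),
    "trial":   np.array([0.7, 0.2, 0.3]),
    "guitar":  np.array([0.0, 0.9, 0.1]),
}

def text_vector(tokens):
    """Represent a text as the mean of its word vectors (OOV words skipped)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def relevance(query_tokens, doc_tokens):
    """Cosine similarity between averaged query and document vectors."""
    q, d = text_vector(query_tokens), text_vector(doc_tokens)
    return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))

print(relevance(["medical", "review"], ["medical", "trial"]))  # high
print(relevance(["medical", "review"], ["guitar"]))            # lower
```

Averaging word vectors is a simple baseline for text representation; systems like the one described typically feed such scores as features into a Learning-to-Rank model rather than ranking by cosine alone.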


Ambient Search: A Document Retrieval System for Speech Streams

We present Ambient Search, an open source system for displaying and retrieving relevant documents in real time for speech input. The system works ambiently, that is, it unobtrusively listens to speech streams in the background, identifies keywords and keyphrases for query construction, and continuously serves relevant documents from its index. Query terms are ranked with Word2Vec and TF-IDF an...


Content Based Document Recommender using Deep Learning

With the recent advancements in information technology, there has been a huge surge in the amount of data available. But information retrieval technology has not been able to keep up with this pace of information generation, resulting in excessive time spent retrieving relevant information. Even though systems exist for assisting users to search a database along with filtering and recommending r...


Multilingual Vector Representations of Words, Sentences, and Documents

Neural vector representations are now ubiquitous in all subfields of natural language processing and text mining. While methods such as word2vec and GloVe are well-known, multilingual and cross-lingual vector representations have also become important. In particular, such representations can describe not only words but entire sentences and documents as well.




Journal:
  • CoRR

Volume: abs/1707.06598  Issue:

Pages: -

Publication date: 2017